Deep neural networks are vulnerable to adversarial attacks. Ideally, a robust model should perform well on both the perturbed training data and unseen perturbed test data. It has been found empirically that fitting perturbed training data is not hard, but generalizing to perturbed test data is quite difficult. To better understand adversarial generalization, it is of great interest to study the adversarial Rademacher complexity (ARC) of deep neural networks. However, how to bound ARC in the multi-layer case is largely unclear due to the difficulty of analyzing the adversarial loss in the definition of ARC. There have been two types of attempts at bounding ARC. One is to provide upper bounds of ARC in the linear and one-hidden-layer cases; however, these approaches seem hard to extend to multi-layer cases. The other is to modify the adversarial loss and provide upper bounds of Rademacher complexity on such surrogate losses in multi-layer cases; however, such variants of Rademacher complexity are not guaranteed to bound meaningful robust generalization gaps (RGG). In this paper, we provide a solution to this unsolved problem. Specifically, we provide the first bound of adversarial Rademacher complexity of deep neural networks. Our approach is based on covering numbers. We provide a method to handle the robustified function classes of DNNs such that we can calculate their covering numbers. Finally, we provide experiments to study the empirical implications of our bounds and offer an analysis of poor adversarial generalization.
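For concreteness, the quantity referred to above is commonly written as follows; this is a minimal sketch of the standard definition, and the choice of an $\ell_p$ perturbation ball of radius $\epsilon$ and a generic loss $\ell$ are assumptions, since the abstract does not fix them:

$$\widetilde{\mathfrak{R}}_S(\ell \circ \mathcal{F}) = \mathbb{E}_{\sigma}\left[\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i \max_{\|x_i' - x_i\|_p \le \epsilon} \ell\big(f(x_i'), y_i\big)\right],$$

where $S = \{(x_i, y_i)\}_{i=1}^{n}$ is the training sample and $\sigma_1, \dots, \sigma_n$ are i.i.d. Rademacher variables. The inner maximization is what makes the robustified function class difficult to cover, which is exactly the obstacle that the covering-number argument described above must handle.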
Since Reddi et al. 2018 pointed out the divergence issue of Adam, many new variants have been designed to obtain convergence. However, vanilla Adam remains exceptionally popular and works well in practice. Why is there a gap between theory and practice? We point out that there is a mismatch between the settings of theory and practice: Reddi et al. 2018 pick the problem after picking Adam's hyperparameters, i.e., $(\beta_1, \beta_2)$, while practical applications often fix the problem first and then tune $(\beta_1, \beta_2)$. Due to this observation, we conjecture that the empirical convergence can be theoretically justified only if we change the order of picking the problem and the hyperparameters. In this work, we confirm this conjecture. We prove that, when $\beta_2$ is large and $\beta_1 < \sqrt{\beta_2} < 1$, Adam converges to the neighborhood of critical points. The size of the neighborhood is proportional to the variance of the stochastic gradients. Under an extra condition (the strong growth condition), Adam converges to critical points. As $\beta_2$ increases, our convergence results can cover any $\beta_1 \in [0, 1)$, including $\beta_1 = 0.9$, the default setting in deep learning libraries. Our results show that Adam can converge under a wide range of hyperparameters without any modification to its update rules. To our knowledge, we are the first to prove this result without strong assumptions such as bounded gradients. When $\beta_2$ is small, we further point out a large region of $(\beta_1, \beta_2)$ in which Adam can diverge to infinity. Our divergence results consider the same setting as our convergence results, indicating a phase transition from divergence to convergence as $\beta_2$ increases. These positive and negative results provide suggestions on how to tune Adam's hyperparameters.
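For reference, below is a minimal Python sketch of the standard Adam update, showing where $(\beta_1, \beta_2)$ enter; the variable names, bias-correction form, and default values follow the common presentation of Adam rather than the notation of this particular paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update; beta1 and beta2 are the hyperparameters discussed above."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias corrections for step t = 1, 2, ...
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

In the convergence regime described above, the common default pair $(\beta_1, \beta_2) = (0.9, 0.999)$ already satisfies $\beta_1 < \sqrt{\beta_2} < 1$, so no modification of this update rule is needed.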
Recently, generalization on out-of-distribution (OOD) data with correlation shift has attracted great attention. The correlation shift is caused by spurious attributes that correlate with the class label, as the correlation between them may differ between the training and test data. For such a problem, we show that models that are conditionally independent of the spurious attributes given the class label are generalizable. Based on this, a metric, Conditional Spurious Variation (CSV), which controls the OOD generalization error, is proposed to measure this conditional independence. To improve OOD generalization, we regularize the training process with the proposed CSV. Under mild assumptions, our training objective can be formulated as a nonconvex-concave mini-max problem. An algorithm with a provable convergence rate is proposed to solve the problem. Extensive empirical results validate the efficacy of our algorithm in improving OOD generalization.
Image coding for machines (ICM) aims to compress images for AI task analysis rather than to satisfy human perception. Learning a feature that is both general (for AI tasks) and compact (for compression) is pivotal to its success. In this paper, we attempt to develop an ICM framework by learning universal features while also considering compression. We name such features omnipotent features and the corresponding framework Omni-ICM. Considering that self-supervised learning (SSL) improves feature generalization, we integrate it with the compression task into the Omni-ICM framework to learn omnipotent features. However, it is non-trivial to coordinate semantics modeling in SSL with redundancy removal in compression, so we design a novel Information Filtering (IF) module that combines instance discrimination and entropy minimization to adaptively drop information that is weakly related to AI tasks (e.g., some texture redundancy). Unlike previous task-specific solutions, Omni-ICM directly supports AI task analysis based on the learned omnipotent features without joint training or extra transformation. Although simple and intuitive, Omni-ICM significantly outperforms existing traditional and learning-based codecs on multiple fundamental vision tasks.
Differential privacy (DP) is an essential technique for privacy preservation. It has been found that large models trained for privacy preservation perform worse than smaller ones (e.g., ResNet50 is worse than ResNet18). To better understand this phenomenon, we study high-dimensional DP learning from the viewpoint of generalization. Theoretically, we show that for a simple Gaussian model, even with small DP noise, the classification error can be as bad as random guessing if the dimension is large enough. We then propose a feature selection method to reduce the size of the model, based on a new metric that trades off classification accuracy and privacy preservation. Experiments on real data support our theoretical results and demonstrate the advantage of the proposed method.
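As a toy illustration of the dimension effect described above, the sketch below applies the standard Gaussian mechanism coordinate-wise: for a fixed privacy budget the per-coordinate noise scale stays fixed, so the total injected noise grows linearly with the dimension $d$. The calibration formula is the usual one for the Gaussian mechanism and is not the paper's exact model.

```python
import numpy as np

def gaussian_mechanism(stat, l2_sensitivity, epsilon, delta):
    """Standard Gaussian mechanism: add i.i.d. N(0, sigma^2) noise to each coordinate."""
    sigma = l2_sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return stat + np.random.normal(0.0, sigma, size=stat.shape)

# The true statistic is the zero vector, so everything we observe is privacy noise.
# Its total squared norm grows roughly like d * sigma^2 as the dimension d grows.
for d in (10, 1_000, 100_000):
    noisy = gaussian_mechanism(np.zeros(d), l2_sensitivity=1.0, epsilon=1.0, delta=1e-5)
    print(d, np.sum(noisy ** 2))
```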
Nonconvex-concave min-max problems arise in many machine learning applications, including minimizing the pointwise maximum of a set of nonconvex functions and robust adversarial training of neural networks. A popular approach to solving this problem is the gradient descent-ascent (GDA) algorithm, which unfortunately can exhibit oscillation in the nonconvex setting. In this paper, we introduce a "smoothing" scheme that can be combined with GDA to stabilize the oscillation and ensure convergence to a stationary solution. We prove that the stabilized GDA algorithm achieves an $O(1/\epsilon^2)$ iteration complexity for minimizing the pointwise maximum of a finite collection of nonconvex functions. Moreover, the smoothed GDA algorithm achieves an $O(1/\epsilon^4)$ iteration complexity for general nonconvex-concave problems. Extensions of this stabilized GDA algorithm to the multi-block case are presented. To the best of our knowledge, this is the first algorithm to achieve $O(1/\epsilon^2)$ for a class of nonconvex-concave problems. We illustrate the practical efficiency of the stabilized GDA algorithm in robust training.
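To make the idea concrete, here is an illustrative Python sketch of GDA with one common way of smoothing it: a proximal term $(p/2)\|x - z\|^2$ anchored at a slowly moving auxiliary point $z$. The specific smoothing scheme, step sizes, and parameter names are assumptions for illustration and may differ from the algorithm analyzed in the paper.

```python
import numpy as np

def smoothed_gda(grad_x, grad_y, x0, y0, steps=1000,
                 lr_x=1e-2, lr_y=1e-2, p=1.0, beta=0.1):
    """GDA on the proximally smoothed objective f(x, y) + (p/2) * ||x - z||^2,
    where the anchor z slowly tracks x to damp oscillation (illustrative sketch)."""
    x, y, z = np.array(x0, float), np.array(y0, float), np.array(x0, float)
    for _ in range(steps):
        x = x - lr_x * (grad_x(x, y) + p * (x - z))  # descent step on the smoothed x-gradient
        y = y + lr_y * grad_y(x, y)                  # ascent step on y
        z = z + beta * (x - z)                       # move the anchor slowly toward x
    return x, y
```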
Marine waves significantly disturb the motion of an unmanned surface vehicle (USV). An unmanned aerial vehicle (UAV) can hardly land on a USV that undergoes irregular motion. An oversized landing platform is usually necessary to guarantee landing safety, which limits the number of UAVs that can be carried. We propose a landing system assisted by a tether and robot manipulation. The system can land multiple UAVs without increasing the USV's size. An MPC controller stabilizes the end-effector and tracks the UAVs, and an adaptive estimator addresses the disturbance caused by the base motion. The working strategy of the system is designed to plan the motion of each device. We have validated the manipulator controller through simulations and well-controlled indoor experiments. During field tests, the proposed system caught and placed the UAVs while the disturbed USV rolled through a range of approximately 12 degrees.
Image-text retrieval in remote sensing aims to provide flexible information for data analysis and applications. In recent years, state-of-the-art methods have been dedicated to ``scale decoupling'' and ``semantic decoupling'' strategies to further enhance representation capability. However, these previous approaches focus on either disentangling scales or disentangling semantics, and neglect to merge the two ideas in a unified model, which severely limits the performance of cross-modal retrieval models. To address these issues, we propose a novel Scale-Semantic Joint Decoupling Network (SSJDN) for remote sensing image-text retrieval. Specifically, we design the Bidirectional Scale Decoupling (BSD) module, which exploits Salience Feature Extraction (SFE) and Salience-Guided Suppression (SGS) units to adaptively extract potential features and suppress cumbersome features at other scales in a bidirectional pattern, yielding different scale clues. Besides, we design the Label-supervised Semantic Decoupling (LSD) module, which leverages category semantic labels as prior knowledge to supervise images and texts in probing significant semantics-related information. Finally, we design a Semantic-guided Triple Loss (STL), which adaptively generates a constant to adjust the loss function, improving the probability of matching images and texts with the same semantics and shortening the convergence time of the retrieval model. Our proposed SSJDN outperforms state-of-the-art approaches in numerical experiments conducted on four benchmark remote sensing datasets.
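As background for the loss described above, the sketch below shows the standard bidirectional hinge-based triplet ranking loss widely used in image-text retrieval, with the margin exposed as the constant that an adaptive scheme such as STL would adjust; the adaptive rule itself is specific to the paper and is not reproduced here.

```python
import torch

def triplet_ranking_loss(sim, margin=0.2):
    """Standard bidirectional triplet ranking loss over a similarity matrix where
    sim[i, j] = similarity(image_i, text_j) and diagonal entries are matched pairs.
    `margin` is the constant that an adaptive scheme would replace."""
    pos = sim.diag().view(-1, 1)
    cost_i2t = (margin + sim - pos).clamp(min=0)      # image-to-text ranking violations
    cost_t2i = (margin + sim - pos.t()).clamp(min=0)  # text-to-image ranking violations
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    return cost_i2t.masked_fill(mask, 0).sum() + cost_t2i.masked_fill(mask, 0).sum()
```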
Combining multiple sensors enables a robot to maximize its perceptual awareness of the environment and enhance its robustness to external disturbances, which is crucial to robot navigation. This paper proposes the FusionPortable benchmark, a complete multi-sensor dataset with diverse sequences for mobile robots. This paper makes three contributions. First, we advance a portable and versatile multi-sensor suite that offers rich sensory measurements: 10 Hz LiDAR point clouds, 20 Hz stereo frame images, high-rate and asynchronous events from stereo event cameras, 200 Hz inertial readings from an IMU, and 10 Hz GPS signals. The sensors are temporally synchronized in hardware. The device is lightweight, self-contained, and offers plug-and-play support for mobile robots. Second, we build the dataset by collecting 17 sequences that cover a variety of campus environments, exploiting multiple robot platforms for data collection. Some sequences are challenging for existing SLAM algorithms. Third, we provide ground truth for evaluating localization and mapping performance. We also evaluate state-of-the-art SLAM approaches and identify their limitations. The dataset, consisting of raw sensor measurements, ground truth, calibration data, and evaluated algorithms, will be released at: https://ram-lab.com/file/site/site/multi-sensor-dataset.
Compressed image super-resolution has attracted great attention in recent years, where images are degraded by both compression artifacts and low-resolution artifacts. Owing to the complex hybrid distortions, it is hard to restore the distorted image through a simple cooperation of super-resolution and compression artifact removal. In this paper, we take a step forward and propose the Hierarchical Swin Transformer (HST) network to restore low-resolution compressed images, which jointly captures hierarchical feature representations and enhances the representation at each scale with a Swin Transformer. Moreover, we find that pretraining with super-resolution (SR) tasks is vital for compressed image super-resolution. To explore the effects of different SR pretraining, we take commonly used SR tasks (e.g., bicubic and different real super-resolution simulations) as our pretraining tasks and reveal that SR plays an irreplaceable role in compressed image super-resolution. With the cooperation of HST and pretraining, our HST achieves fifth place in the low-quality compressed image super-resolution track of the AIM 2022 challenge, with a PSNR of 23.51 dB. Extensive experiments and ablation studies validate the effectiveness of our proposed method.